Virtual camera system

A virtual camera system controls a camera, or a set of cameras, to display a view of a 3D virtual world. Camera systems are used in video games, where their purpose is to show the action at the best possible angle; more generally, they are used in 3D virtual worlds whenever a third-person view is required.

Unlike filmmakers, virtual camera system creators must deal with a world that is interactive and unpredictable. It is not possible to know where the player's character will be in the next few seconds; therefore, the shots cannot be planned in advance as a filmmaker would plan them. To solve this problem, the system relies on certain rules or on artificial intelligence to select the most appropriate shots.

There are three main types of camera systems. In fixed camera systems, the camera does not move at all and the system displays the player's character in a succession of still shots. Tracking cameras, on the other hand, follow the character's movements. Finally, interactive camera systems are partially automated and allow the player to directly change the view. To implement camera systems, video game developers use techniques such as constraint solvers, artificial intelligence scripts, or autonomous agents.

Third-person view

In video games, "third person" refers to a graphical perspective rendered from a fixed distance behind and slightly above the player character. This viewpoint allows players to see a more strongly characterized avatar, and is most common in action games and action-adventure games. Games with this perspective often make use of positional audio, in which the volume of ambient sounds varies depending on the position of the avatar.[1]
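The positional-audio idea can be sketched as a simple distance-based attenuation function. All names and constants here are illustrative; real engines expose similar controls through their audio middleware rather than this exact function:

```python
import math

def ambient_volume(sound_pos, avatar_pos, max_volume=1.0, falloff=0.1):
    """Scale an ambient sound's volume by the avatar's distance to its source.

    `sound_pos` and `avatar_pos` are (x, y, z) tuples in world space.
    `falloff` controls how quickly volume drops with distance.
    """
    dx = sound_pos[0] - avatar_pos[0]
    dy = sound_pos[1] - avatar_pos[1]
    dz = sound_pos[2] - avatar_pos[2]
    distance = math.sqrt(dx * dx + dy * dy + dz * dz)
    # Inverse-distance attenuation, equal to max_volume at the source itself.
    return max_volume / (1.0 + falloff * distance)
```

As the avatar moves away from the source, the returned volume decays smoothly toward zero, which avoids audible popping as the character crosses zone boundaries.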

There are primarily three types of third-person camera systems: "fixed camera systems", in which the camera positions are set during game creation; "tracking camera systems", in which the camera simply follows the player's character; and "interactive camera systems", which are under the player's control.

Fixed

In this kind of system, the developers set the properties of the camera, such as its position, orientation, or field of view, during game creation. The camera views will not change dynamically, so the same place will always be shown under the same set of views. An early example of this kind of camera system can be seen in Alone in the Dark: while the characters are rendered in 3D, the backgrounds against which they move are pre-rendered. The early Resident Evil games are notable examples of games that use fixed cameras.
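A fixed camera system can be sketched as a lookup from the player's position to a pre-authored camera: each region of the level is paired with the shot the designers chose for it. The zone bounds and camera names below are hypothetical:

```python
# Each zone pairs an axis-aligned trigger region (on the floor plane)
# with a camera authored at design time. Names are made up for illustration.
ZONES = [
    ((0, 0, 10, 10), "hallway_cam"),
    ((10, 0, 20, 10), "lobby_cam"),
]

def pick_fixed_camera(player_pos, zones, default_camera):
    """Return the pre-set camera whose trigger region contains the player."""
    x, y = player_pos
    for (min_x, min_y, max_x, max_y), camera in zones:
        if min_x <= x <= max_x and min_y <= y <= max_y:
            return camera
    return default_camera
```

Because the mapping is authored by hand, the designers fully control which shot the player sees in each location, which is what makes the cinematic framing described above possible.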

One advantage of this camera system is that it allows the game designers to use the language of film. Indeed, like filmmakers, they have the possibility to create a mood through camerawork and careful selection of shots. Games that use this kind of technique are often praised for their cinematic qualities.[2]

Tracking

As the name suggests, a tracking camera follows the character from behind. The player does not control the camera in any way: they cannot, for example, rotate it or move it to a different position. This type of camera system was very common in early 3D games such as Crash Bandicoot or Tomb Raider, since it is very simple to implement. However, it has a number of problems. In particular, if the current view is unsuitable (either because it is occluded by an object, or because it is not showing what the player is interested in), it cannot be changed, since the player does not control the camera.[3][4][5] This viewpoint also causes difficulty when a character turns or stands with their back against a wall: the camera may jerk or end up in awkward positions.[1]
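The core of a tracking camera can be sketched in a few lines: each frame, the camera moves toward a point behind and above the character. This is a minimal sketch; the offset here is expressed in world space for simplicity, whereas a real system would rotate it by the character's facing direction:

```python
def track_camera(camera_pos, target_pos, offset=(0.0, 2.0, -6.0), smoothing=0.2):
    """Move the camera one step toward a point behind and above the target.

    `camera_pos` and `target_pos` are (x, y, z) tuples; `smoothing` in (0, 1]
    controls how quickly the camera catches up each frame.
    """
    desired = tuple(t + o for t, o in zip(target_pos, offset))
    # Exponential smoothing avoids jerky motion when the target moves abruptly.
    return tuple(c + smoothing * (d - c) for c, d in zip(camera_pos, desired))
```

Calling this once per frame makes the camera converge on the desired position; note that nothing in the loop checks for occluding geometry, which is exactly the weakness described above.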

Implementation

There is a large body of research on how to implement a camera system.[6] The role of a constraint solver is to generate the best possible shot given a set of visual constraints. In other words, the constraint solver is given a requested shot composition such as "show this character and ensure that he covers at least 30 percent of the screen space". The solver then uses various methods to try to create a shot that satisfies this request. Once a suitable shot is found, the solver outputs the coordinates and rotation of the camera, which can then be used by the graphics engine to render the view.[7]
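A toy version of the screen-space constraint can illustrate what a solver evaluates. Assuming a pinhole camera with a known vertical field of view, the fraction of the screen a character of a given size occupies depends only on the camera's distance, so a crude "solver" can simply search candidate distances for the best match. All parameters are illustrative:

```python
import math

def screen_coverage(camera_distance, character_radius, fov_degrees=60.0):
    """Approximate fraction of the vertical screen a character occupies.

    Assumes a pinhole camera aimed at the character; `character_radius`
    is half the character's height in world units.
    """
    half_fov = math.radians(fov_degrees) / 2.0
    visible_height = 2.0 * camera_distance * math.tan(half_fov)
    return min(1.0, (2.0 * character_radius) / visible_height)

def solve_distance(target_coverage, character_radius, candidates):
    """Pick the candidate distance whose coverage best matches the request."""
    return min(
        candidates,
        key=lambda d: abs(screen_coverage(d, character_radius) - target_coverage),
    )
```

Real solvers handle many more constraints (visibility, framing, occlusion) and search over full camera poses rather than a single distance, but the pattern of scoring candidates against a requested composition is the same.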

In some camera systems, if no solution can be found, the constraints are relaxed. For example, if the solver cannot generate a shot where the character occupies 30 percent of the screen space, it might ignore the screen-space constraint and simply ensure that the character is visible at all, for instance by zooming out.[8]
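One way to sketch constraint relaxation is to order the constraints by importance and drop the least important ones until the solver succeeds. The solver interface below is a placeholder, not the API of any particular system:

```python
def solve_with_relaxation(constraints, solver):
    """Try the full constraint set first, then progressively drop the
    least important constraints until the solver finds a shot.

    `constraints` is ordered from most to least important; `solver` takes
    a list of constraints and returns a shot or None. Both are illustrative.
    """
    for keep in range(len(constraints), 0, -1):
        shot = solver(constraints[:keep])
        if shot is not None:
            return shot
    return None  # even the single most important constraint failed
```

In the example from the text, "character is visible" would come first in the list and "covers 30 percent of the screen" second, so visibility is the last constraint to be abandoned.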

Some camera systems use predefined scripts to decide how to select the current shot. Typically, a script is triggered by an in-game action. For instance, when the player's character initiates a conversation with another character, the "conversation" script is triggered. This script contains instructions on how to "shoot" a two-character conversation; the shots will thus be a combination of, for instance, over-the-shoulder shots and close-up shots. Such script-based approaches usually rely on a constraint solver to generate the camera coordinates.[9]
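A conversation script of this kind can be represented as little more than an ordered list of shot types that the system cycles through as the dialogue progresses. The shot names below are illustrative; in a real system each entry would be handed to the constraint solver to compute actual camera coordinates:

```python
# A hypothetical "conversation" script: alternating over-the-shoulder
# and close-up shots on the two speakers, A and B.
CONVERSATION_SCRIPT = [
    "over_shoulder_a",
    "close_up_b",
    "over_shoulder_b",
    "close_up_a",
]

def next_shot(script, shot_index):
    """Return the shot type for the given step, cycling back to the start."""
    return script[shot_index % len(script)]
```

Keeping the script declarative like this lets designers author cinematic patterns without touching the solver that ultimately places the camera.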

Bill Tomlinson took a more original approach to the problem. He devised a system in which the camera is an autonomous agent with its own personality: the style of the shots and their rhythm are affected by its mood. Thus a happy camera will "cut more frequently, spend more time in close-up shots, move with a bouncy, swooping motion, and brightly illuminate the scene".[10]
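The mood-driven idea can be caricatured as a mapping from a mood value to shot parameters. The numbers below are illustrative, not taken from Tomlinson's system; they merely encode the tendency he describes, that a happier camera cuts faster, favours close-ups, and lights the scene more brightly:

```python
def camera_parameters(happiness):
    """Map a mood value in [0, 1] to shot parameters (illustrative values)."""
    return {
        "cut_interval_seconds": 8.0 - 5.0 * happiness,  # happier -> more frequent cuts
        "close_up_probability": 0.2 + 0.6 * happiness,  # happier -> more close-ups
        "scene_brightness": 0.5 + 0.5 * happiness,      # happier -> brighter lighting
    }
```

Driving the camera from a continuous mood state like this lets shot style shift gradually with the emotional tone of a scene, rather than switching between discrete presets.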

In mixed-reality applications

In 2010, Microsoft released the Kinect, a 3D scanner/webcam hybrid peripheral that provides full-body detection of Xbox 360 players and hands-free control of the user interfaces of video games and other software on the console. Oliver Kreylos[11] of the University of California, Davis later modified it in a series of YouTube videos that showed him combining the Kinect with a PC-based virtual camera.[12] Because the Kinect is capable of detecting a full range of depth within a captured scene (through computer stereo vision and structured light), Kreylos demonstrated that the Kinect and the virtual camera allow free-viewpoint navigation of that depth range, although the camera could only capture video of the scene as seen from the front of the Kinect, resulting in fields of black, empty space where the camera was unable to capture video within the field of depth. Kreylos later elaborated on the modification by combining the video streams of two Kinects to further enhance the video capture within the view of the virtual camera.[13] Kreylos' developments using the Kinect were covered, among the works of others in the Kinect hacking and homebrew community, in a New York Times article.[14]

References

  1. ^ a b Rollings, Andrew; Ernest Adams (2006). Fundamentals of Game Design. Prentice Hall. ISBN 0-13-168747-6. http://wps.prenhall.com/bp_gamedev_1/54/14053/3597646.cw/index.html. 
  2. ^ Casamassina, Matt. "Resident Evil Review". IGN. http://uk.cube.ign.com/articles/358/358101p3.html. Retrieved 2009-03-22. 
  3. ^ "Sonic Adventure Review". IGN. http://uk.dreamcast.ign.com/articles/160/160140p1.html. Retrieved 2009-03-22. 
  4. ^ "Tomb Raider: The Last Revelation Review". IGN. http://uk.pc.ign.com/articles/162/162059p1.html. Retrieved 2009-03-22. 
  5. ^ Carle, Chris. "Enter the Matrix Review". IGN. http://uk.cube.ign.com/articles/403/403746p3.html. Retrieved 2009-03-22. 
  6. ^ "Cameracontrol.org: The virtual camera control bibliography". http://cameracontrol.org/blog/bibliography/. Retrieved 6 May 2011. 
  7. ^ Bares, William; Scott McDermott, Christina Boudreaux, Somying Thainimit (2000). "Virtual 3D camera composition from frame constraints". International Multimedia Conference (California, United States: Marina del Rey): 177–186. http://www.cacs.louisiana.edu/~sdm1718/papers/Virtual_3D_Camera_Composition_from_Frame_Constraints.pdf. Retrieved 2009-03-22. 
  8. ^ Drucker, Steven M.; David Zeltzer (1995). "CamDroid: A System for Implementing Intelligent Camera Control". Symposium on Interactive 3D Graphics. ISBN 0-89791-736-7. http://research.microsoft.com/en-us/um/people/sdrucker/papers/sig95symp.pdf. Retrieved 2009-03-22. 
  9. ^ He, Li-wei; Michael F. Cohen, David H. Salesin (1996). "The Virtual Cinematographer: A Paradigm for Automatic Real-Time Camera Control and Directing". International Conference on Computer Graphics and Interactive Techniques (New York) 23rd: 217–224. http://grail.cs.washington.edu/pub/papers/virtcine.pdf. Retrieved 2009-03-22. 
  10. ^ Tomlinson, Bill; Bruce Blumberg, Delphine Nain (2000). "Expressive Autonomous Cinematography for Interactive Virtual Environments". Proceedings of the fourth international conference on Autonomous agents (Barcelona, Spain) 4th. ISBN 1-58113-230-1. http://www.ics.uci.edu/~wmt/pubs/autonomousAgents00.pdf. Retrieved 2009-03-22. 
  11. ^ "Oliver Kreylos' Homepage". http://idav.ucdavis.edu/~okreylos/index.html. 
  12. ^ Kevin Parrish (November 17, 2010). "Kinect Used As 3D Video Capture Tool". Tom's Hardware. http://www.tomshardware.com/news/Kinect-Oliver-Kreylos-Hack-3D-Video-Capture-Hector-Martin,11640.html. 
  13. ^ Tim Stevens (November 29, 2010). "Two Kinects join forces to create better 3D video, blow our minds (video)". Engadget. http://www.engadget.com/2010/11/29/two-kinects-join-forces-to-create-better-3d-video-blow-our-mind/. 
  14. ^ Jenna Wortham (November 21, 2010). "With Kinect Controller, Hackers Take Liberties". New York Times. http://www.nytimes.com/2010/11/22/technology/22hack.html.